1,824 research outputs found

    An Analysis of Scale Invariance in Object Detection - SNIP

    Full text link
    An analysis of different techniques for recognizing and detecting objects under extreme scale variation is presented. Scale-specific and scale-invariant detector designs are compared by training them with different configurations of input data. By evaluating the performance of different network architectures for classifying small objects on ImageNet, we show that CNNs are not robust to changes in scale. Based on this analysis, we propose to train and test detectors on the same scales of an image pyramid. Since small and large objects are difficult to recognize at smaller and larger scales respectively, we present a novel training scheme called Scale Normalization for Image Pyramids (SNIP), which selectively back-propagates the gradients of object instances of different sizes as a function of the image scale. On the COCO dataset, our single-model performance is 45.7% mAP, and an ensemble of 3 networks obtains an mAP of 48.3%. We use off-the-shelf ImageNet-1000 pre-trained models and only train with bounding box supervision. Our submission won the Best Student Entry in the COCO 2017 challenge. Code will be made available at \url{http://bit.ly/2yXVg4c}. Comment: CVPR 2018, camera-ready version.
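    The scale-selective back-propagation that the abstract describes can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the valid-area thresholds (`lo`, `hi`) and the helper names are assumptions chosen for clarity.

```python
def snip_valid_mask(box_areas, image_scale, lo=32**2, hi=160**2):
    """True where an instance's area, at this image-pyramid scale, falls in
    the valid range.  lo/hi are illustrative thresholds, not SNIP's exact
    per-resolution values."""
    return [lo <= area * image_scale**2 <= hi for area in box_areas]

def masked_loss(per_instance_losses, box_areas, image_scale):
    """Sum losses only over instances valid at this scale.  Excluded
    instances contribute zero loss, hence zero gradient, mimicking SNIP's
    selective back-propagation."""
    mask = snip_valid_mask(box_areas, image_scale)
    return sum(l for l, m in zip(per_instance_losses, mask) if m)
```

    Note how a small box that is invalid at scale 1.0 becomes valid at a larger scale, which is exactly the mechanism that lets each pyramid level train only on appropriately sized objects.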

    The Highest Price Ever: The Great NYSE Seat Sale of 1928–1929 and Capacity Constraints

    Get PDF
    During the 1920s, the New York Stock Exchange's position as the dominant American exchange was eroding. Costs to customers, measured as bid-ask spreads, spiked when surging inflows of orders collided with the constraint created by a fixed number of brokers. The NYSE's management proposed, and the membership approved, a 25 percent increase in the number of seats by issuing a quarter-seat dividend to all members. An event study reveals that the aggregate value of the NYSE rose in anticipation of improved competitiveness. These expectations were justified, as bid-ask spreads became less sensitive to peak volume days.

    Fast-AT: Fast Automatic Thumbnail Generation using Deep Neural Networks

    Full text link
    Fast-AT is an automatic thumbnail generation system based on deep neural networks. It is a fully convolutional deep neural network which learns specific filters for thumbnails of different sizes and aspect ratios. During inference, the appropriate filter is selected depending on the dimensions of the target thumbnail. Unlike most previous work, Fast-AT does not utilize saliency but addresses the problem directly, and it eliminates the need to conduct a region search on the saliency map. The model generalizes to thumbnails of different sizes, including those with extreme aspect ratios, and can generate thumbnails in real time. A data set of more than 70,000 thumbnail annotations was collected to train Fast-AT. We show competitive results in comparison to existing techniques.
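    The inference-time step of picking a size-specific filter from the target thumbnail's dimensions can be sketched as a nearest-aspect-ratio lookup. The trained ratio set and function name below are hypothetical illustrations, not details from the paper.

```python
# Hypothetical set of aspect ratios the network has learned filters for.
TRAINED_RATIOS = [1.0, 4 / 3, 16 / 9, 1 / 2]

def select_head(target_w, target_h, ratios=TRAINED_RATIOS):
    """Return the index of the learned filter bank whose aspect ratio is
    closest to the requested thumbnail's width/height ratio."""
    target = target_w / target_h
    return min(range(len(ratios)), key=lambda i: abs(ratios[i] - target))
```

    A 160x90 request would route to the 16:9 filters, while a 100x100 request would route to the square ones; this dispatch is what lets one forward pass serve thumbnails of arbitrary requested dimensions.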

    Discussant Comments

    Get PDF

    The University, the Community, and Race

    Get PDF